technical solution
Trustworthy AI Must Account for Interactions
Trustworthy AI encompasses many aspirational aspects for aligning AI systems with human values, including fairness, privacy, robustness, explainability, and uncertainty quantification. Ultimately the goal of Trustworthy AI research is to achieve all aspects simultaneously. However, efforts to enhance one aspect often introduce unintended trade-offs that negatively impact others. In this position paper, we review notable approaches to these five aspects and systematically consider every pair, detailing the negative interactions that can arise. For example, applying differential privacy to model training can amplify biases, undermining fairness. Drawing on these findings, we take the position that current research practices of improving one or two aspects in isolation are insufficient. Instead, research on Trustworthy AI must account for interactions between aspects and adopt a holistic view across all relevant axes at once. To illustrate our perspective, we provide guidance on how practitioners can work towards integrated trust, examples of how interactions affect the financial industry, and alternative views.
- Information Technology > Security & Privacy (1.00)
- Banking & Finance (1.00)
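The privacy–fairness interaction the abstract cites has a simple mechanical intuition. A minimal sketch, assuming a DP-SGD-style update (per-example gradient clipping plus Gaussian noise) for logistic regression; the function and parameter names are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_sgd_step(w, X, y, clip_norm=1.0, noise_mult=1.0, lr=0.1):
    """One DP-SGD-style step: clip each per-example gradient to
    clip_norm, average, then add Gaussian noise scaled to the clip."""
    preds = 1.0 / (1.0 + np.exp(-X @ w))           # sigmoid predictions
    per_example = (preds - y)[:, None] * X          # per-example gradients, shape (n, d)
    norms = np.linalg.norm(per_example, axis=1, keepdims=True)
    clipped = per_example / np.maximum(1.0, norms / clip_norm)
    noise = rng.normal(0.0, noise_mult * clip_norm, size=w.shape)
    grad = clipped.mean(axis=0) + noise / len(X)
    return w - lr * grad
```

Because the noise scale is fixed while a small group's contribution to the averaged gradient shrinks with its share of the batch, underrepresented groups lose proportionally more signal to the noise; this is one commonly cited mechanism behind the bias amplification the abstract describes.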
Ministers block Lords bid to make AI firms declare use of copyrighted content
The government stripped the transparency amendment, which was backed by peers in the bill's reading in the House of Lords last week, out of the draft text during a Commons debate on Wednesday afternoon by invoking financial privilege, meaning there is no budget available for new regulations. There were 297 MPs who voted in favour of removing the amendment, while 168 opposed. The data protection minister, Chris Bryant, told MPs that although he recognised that for many in the creative industries this "feels like an apocalyptic moment", he did not think the transparency amendment delivered the required solutions, and he argued that changes needed to be completed "in the round and not just piecemeal". Lady Kidron said: "The government failed to answer its own backbenchers who repeatedly asked 'if not now then when?', and the minister replied with roundtable reviews and spurious problems about technical solutions. It is for government to set the laws and incentivise companies to obey them, not run roundtables trying to work out technical solutions that they are not fit to provide. It is astonishing that a Labour government would abandon the labour force of an entire sector."
- Government > Regional Government > Europe Government > United Kingdom Government (0.57)
- Information Technology > Security & Privacy (0.56)
- Law > Statutes (0.52)
Robustness and Cybersecurity in the EU Artificial Intelligence Act
Nolte, Henrik, Rateike, Miriam, Finck, Michèle
The EU Artificial Intelligence Act (AIA) establishes different legal principles for different types of AI systems. While prior work has sought to clarify some of these principles, little attention has been paid to robustness and cybersecurity. This paper aims to fill this gap. We identify legal challenges and shortcomings in provisions related to robustness and cybersecurity for high-risk AI systems (Art. 15 AIA) and general-purpose AI models (Art. 55 AIA). We show that robustness and cybersecurity demand resilience against performance disruptions. Furthermore, we assess potential challenges in implementing these provisions in light of recent advancements in the machine learning (ML) literature. Our analysis informs efforts to develop harmonized standards, guidelines by the European Commission, as well as benchmarks and measurement methodologies under Art. 15(2) AIA. With this, we seek to bridge the gap between legal terminology and ML research, fostering a better alignment between research and implementation efforts.
- Europe > Germany > Baden-Württemberg > Tübingen Region > Tübingen (0.14)
- North America > United States (0.14)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- (6 more...)
- Research Report (1.00)
- Overview (0.67)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (1.00)
Robot Detection System 1: Front-Following
Front-following is more technically difficult to implement than the other two human-following technologies, but it is more practical, can be applied in more areas, and solves more real-world problems, offering many advantages that back-following and side-by-side following lack. In this paper, we discuss the basic principles and overall design ideas of this technology, and present a variety of novel and useful methods, illustrated with figures. Our research results were released as open source in 2018; this paper serves to disseminate them more widely. Further ideas and analysis can be found in the other papers of this series, whose titles begin with "Robot Design System", by Jinwei Lin, the sole author of the series.
AI Explainability at the IHM Conference 2022 at UNamur: Misdirection of XAI from technical solutions to user adaptation
On the first day, I attended the workshop on AI Explainability that brought together researchers from both the HCI and computer science communities. The workshop was opened by UNamur professors Bruno Dumas, specializing in HCI, and Benoît Frénay, who works on machine learning. Dr Frénay presented the XAI research field and the interdisciplinary research being conducted at UNamur on this topic. He pointed out the lack of a user-centered approach in the XAI machine learning community, where less than 1% of accepted papers at major conferences, such as NeurIPS, test their XAI methods with user studies. The rest of the morning was devoted to the presentation of eight abstracts, including mine, related to XAI research from either a computer science or an HCI angle.
Neighborhood Watch
Vinton G. Cerf wonders "whether there is any possibility of establishing 'watcher networks'" in his October 2022 Communications "Cerf's Up" column. I must point out, to all who share the concern about "who will watch the watchers", that Philip K. Dick describes this problem in his story The Minority Report (see Wikipedia, https://bit.ly/2XlQcSA). It works often, but not always. So the question arises: how much authority are we willing to grant AI, and is the concept of three AIs working independently on the same problem a feasible solution? I agree with Cerf that we need to come up with a solution before the problem overwhelms us. In the December 2022 Communications, there is a compelling column by Vinton G. Cerf, "On Truth and Belief", which exemplifies the growing worry about agreement, polarization, and the nature of truth.
- Information Technology > Security & Privacy (1.00)
- Health & Medicine (1.00)
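The three-independent-AIs idea the letter raises is essentially triple-modular redundancy with majority voting, where Dick's story turns on the unresolved case in which all three disagree. A minimal sketch (the function and labels are hypothetical, not from the column):

```python
from collections import Counter

def majority_vote(answers):
    """Return the answer that at least two of three independent
    systems agree on, or None when all three disagree (the
    'minority report' case that the scheme cannot resolve)."""
    counts = Counter(answers)
    answer, votes = counts.most_common(1)[0]
    return answer if votes >= 2 else None

majority_vote(["approve", "approve", "deny"])   # -> "approve"
majority_vote(["approve", "deny", "abstain"])   # -> None
```

The scheme only adds trust if the three systems fail independently; if they share training data or architecture, their errors correlate and the vote can confidently agree on the wrong answer.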
THE CONVERSATION: Defining what's ethical in artificial intelligence needs input from Africans
But concerns have emerged about the accountability of AI and related technologies like machine learning. In December 2020 a computer scientist, Timnit Gebru, was fired from Google's Ethical AI team. She had previously raised the alarm about the social effects of bias in AI technologies. For instance, in a 2018 paper, Gebru and another researcher, Joy Buolamwini, had shown that facial recognition software was less accurate at identifying women and people of colour than white men. Biases in training data can have far-reaching and unintended effects.
What Is Continuous Innovation And Why It Matters - FourWeekMBA
That is a process that requires a continuous feedback loop to develop a valuable product and build a viable business model. Continuous innovation is a mindset in which products and services are designed and delivered to be tuned around the customers' problem, not the technical solution of the founders. On FourWeekMBA I had Ash Maurya explain why continuous innovation matters so much. Let's start with the key principles! One of the biases that many entrepreneurs run into is a premature love of the solution.
Want to develop ethical AI? Then we need more African voices
Artificial intelligence (AI) was once the stuff of science fiction. It is used in mobile phone technology and motor vehicles. But concerns have emerged about the accountability of AI and related technologies like machine learning. In December 2020 a computer scientist, Timnit Gebru, was fired from Google's Ethical AI team. She had previously raised the alarm about the social effects of bias in AI technologies.